Results 1 - 6 of 6
1.
J Arthroplasty ; 38(7 Suppl 2): S199-S207.e2, 2023 07.
Article in English | MEDLINE | ID: mdl-36858127

ABSTRACT

BACKGROUND: The postoperative follow-up of a patient after total knee arthroplasty (TKA) requires regular evaluation of the condition of the knee through interpretation of X-rays. This rigorous analysis requires expertise, time, and methodical standardization. Our work evaluated the use of an artificial intelligence tool, X-TKA, to assist surgeons in their interpretation. METHODS: A series of 12 convolutional neural networks were trained on a large database containing 39,751 X-ray images. These algorithms are able to determine examination quality, identify image characteristics, assess prosthesis sizing and positioning, measure knee-prosthesis alignment angles, and detect anomalies in the bone-cement-implant complex. The individual interpretations of a pool of senior surgeons, with and without the assistance of X-TKA, were evaluated against a reference dataset built in consensus by senior surgeons. RESULTS: The algorithms obtained a mean area under the curve of 0.98 on the quality-assurance and image-characteristics tasks. They reached a mean difference for the predicted angles of 1.71° (standard deviation, 1.53°), similar to the surgeons' average difference of 1.69° (standard deviation, 1.52°). The comparative analysis showed that the assistance of X-TKA allowed surgeons to gain 5% in accuracy and 12% in sensitivity in the detection of interface anomalies. Moreover, this study demonstrated a gain in repeatability for each individual surgeon (Light's kappa +0.17), as well as a gain in reproducibility between surgeons (Light's kappa +0.1). CONCLUSION: This study highlights the benefit of using an artificial intelligence tool for the standardized interpretation of postoperative knee X-rays and indicates its potential for use in clinical practice.
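The per-task area under the curve reported above can be computed directly from predicted scores and binary labels via the rank-sum (Mann-Whitney U) formulation; a minimal sketch with hypothetical anomaly scores, not the study's data or code:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum formulation:
    the fraction of (positive, negative) pairs correctly ranked."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one sample of each class")
    # A pair counts as a win when the positive outscores the negative;
    # ties contribute half a win.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical interface-anomaly scores on six radiographs
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.55, 0.6, 0.3, 0.1]
print(roc_auc(labels, scores))  # 8/9 ≈ 0.889
```

The same function applied per task, then averaged across the 12 networks' tasks, would yield a mean AUC of the kind reported.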


Subject(s)
Arthroplasty, Replacement, Knee , Knee Prosthesis , Osteoarthritis, Knee , Humans , Arthroplasty, Replacement, Knee/methods , Artificial Intelligence , Reproducibility of Results , Knee Joint/diagnostic imaging , Knee Joint/surgery , Osteoarthritis, Knee/diagnostic imaging , Osteoarthritis, Knee/surgery
2.
IEEE Trans Med Imaging ; 42(3): 697-712, 2023 03.
Article in English | MEDLINE | ID: mdl-36264729

ABSTRACT

Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
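Registration accuracy in challenges of this kind is commonly summarised by anatomical label overlap after warping; a minimal Dice-overlap sketch on toy label maps (the exact Learn2Reg metric suite also covers robustness, plausibility, and runtime):

```python
import numpy as np

def dice(labels_fixed, labels_warped, label):
    """Dice overlap of one anatomical label after registration:
    2|A ∩ B| / (|A| + |B|)."""
    a = labels_fixed == label
    b = labels_warped == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1-D label maps standing in for 3-D segmentations
fixed  = np.array([0, 1, 1, 1, 2, 2])
warped = np.array([0, 1, 1, 2, 2, 2])
print(dice(fixed, warped, 1))  # 0.8
```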


Subject(s)
Abdominal Cavity , Deep Learning , Humans , Algorithms , Brain/diagnostic imaging , Abdomen/diagnostic imaging , Image Processing, Computer-Assisted/methods
3.
IEEE/ACM Trans Comput Biol Bioinform ; 19(6): 3317-3331, 2022.
Article in English | MEDLINE | ID: mdl-34714749

ABSTRACT

Precision medicine is a paradigm shift in healthcare that relies heavily on genomics data. However, the complexity of biological interactions, the large number of genes, and the lack of comparative analyses remain a tremendous bottleneck for clinical adoption. In this paper, we introduce a novel, automatic, and unsupervised framework to discover low-dimensional gene biomarkers. Our method is based on the LP-Stability algorithm, a high-dimensional, center-based unsupervised clustering algorithm. It offers modularity with respect to metric functions and scalability, while automatically determining the best number of clusters. Our evaluation combines mathematical and biological criteria into a quantitative metric. The recovered signature is applied to a variety of biological tasks, including the screening of biological pathways and functions and the characterization of tumor types and subtypes. Quantitative comparisons against different distance metrics, commonly used clustering methods, and a reference gene signature from the literature confirm the state-of-the-art performance of our approach. In particular, our signature, based on 27 genes, reports at least 30 times better mathematical significance (average Dunn's Index) and 25% better biological significance (average enrichment in protein-protein interactions) than those produced by the other reference clustering methods. Finally, our signature reports promising results in distinguishing immune-inflammatory from immune-desert tumors, with a high balanced accuracy of 92% on tumor-type classification and an average balanced accuracy of 68% on tumor-subtype classification, representing 7% and 9% higher performance, respectively, than the reference signature.
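Dunn's Index, used above as the mathematical-significance criterion, is the smallest inter-cluster separation divided by the largest intra-cluster diameter (higher values mean tighter, better-separated clusters); a minimal sketch on toy 2-D points, not the paper's genomics pipeline:

```python
import numpy as np

def dunn_index(points, labels):
    """Dunn's index: min inter-cluster distance / max intra-cluster diameter."""
    clusters = [points[labels == k] for k in np.unique(labels)]
    # Largest diameter: max pairwise distance within any single cluster
    diam = max(
        np.linalg.norm(c[i] - c[j])
        for c in clusters for i in range(len(c)) for j in range(i + 1, len(c))
    )
    # Smallest separation: min distance between points of different clusters
    sep = min(
        np.linalg.norm(p - q)
        for a in range(len(clusters)) for b in range(a + 1, len(clusters))
        for p in clusters[a] for q in clusters[b]
    )
    return sep / diam

# Two tight, well-separated toy clusters
pts = np.array([[0., 0.], [0., 1.], [10., 0.], [10., 1.]])
labs = np.array([0, 0, 1, 1])
print(dunn_index(pts, labs))  # 10.0
```

This brute-force version is quadratic in the number of points; for gene-expression-scale data one would precompute a pairwise distance matrix instead.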


Subject(s)
Algorithms , Neoplasms , Humans , Cluster Analysis , Genomics , Pattern Recognition, Automated/methods , Neoplasms/genetics , Gene Expression Profiling/methods
4.
Sci Rep ; 10(1): 12340, 2020 07 23.
Article in English | MEDLINE | ID: mdl-32704007

ABSTRACT

Radiomics relies on the extraction of a wide variety of quantitative image-based features to provide decision support. Magnetic resonance imaging (MRI) contributes to the personalization of patient care but suffers from being highly dependent on acquisition and reconstruction parameters. Today, there are no guidelines regarding the optimal pre-processing of MR images in the context of radiomics, which is crucial for the generalization of published image-based signatures. This study aims to assess the impact of three different intensity normalization methods (Nyul, WhiteStripe, Z-Score) typically used in MRI together with two methods for intensity discretization (fixed bin size and fixed bin number). The impact of these methods was evaluated on first- and second-order radiomics features extracted from brain MRI, establishing a unified methodology for future radiomics studies. Two independent MRI datasets were used. The first one (DATASET1) included 20 institutional patients with WHO grade II and III gliomas who underwent post-contrast 3D axial T1-weighted (T1w-gd) and axial T2-weighted fluid attenuation inversion recovery (T2w-flair) sequences on two different MR devices (1.5 T and 3.0 T) with a 1-month delay. Jensen-Shannon divergence was used to compare pairs of intensity histograms before and after normalization. The stability of first-order and second-order features across the two acquisitions was analysed using the concordance correlation coefficient and the intra-class correlation coefficient. The second dataset (DATASET2) was extracted from the public TCIA database and included 108 patients with WHO grade II and III gliomas and 135 patients with WHO grade IV glioblastomas. The impact of normalization and discretization methods was evaluated based on a tumour grade classification task (balanced accuracy measurement) using five well-established machine learning algorithms. 
Intensity normalization greatly improved the robustness of first-order features and the performance of subsequent classification models. For the T1w-gd sequence, the mean balanced accuracy for tumour grade classification increased from 0.67 (95% CI 0.61-0.73) without normalization to 0.82 (95% CI 0.79-0.84, P = .006), 0.79 (95% CI 0.76-0.82, P = .021) and 0.82 (95% CI 0.80-0.85, P = .005) using the Nyul, WhiteStripe and Z-Score normalization methods, respectively. Relative discretization makes intensity normalization unnecessary for second-order radiomics features. Although the number of bins had only a small impact on classification performance, 32 bins offered a good compromise for both the T1w-gd and T2w-flair sequences. No significant improvement in classification performance was observed with feature selection. A standardized pre-processing pipeline is proposed for the use of radiomics in MRI of brain tumours. For models based on first- and second-order features, we recommend normalizing images with the Z-Score method and adopting an absolute discretization approach. For signatures based on second-order features only, relative discretization can be used without prior normalization. In both cases, 32 bins are recommended for discretization. This study may pave the way for the multicentric development and validation of MR-based radiomics biomarkers.
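The Z-Score normalization and fixed-bin-number (relative) discretization recommended above can be sketched in a few lines; function and parameter names here are illustrative, not the study's code:

```python
import numpy as np

def zscore_normalize(image, mask=None):
    """Z-Score normalization: zero mean and unit variance computed over the
    whole image, or over a region of interest when a mask is given."""
    vals = image[mask] if mask is not None else image
    return (image - vals.mean()) / vals.std()

def discretize_fixed_bin_number(image, n_bins=32):
    """Relative discretization: map intensities into n_bins equal-width bins
    between the image minimum and maximum (bins numbered 1..n_bins)."""
    lo, hi = image.min(), image.max()
    binned = np.floor(n_bins * (image - lo) / (hi - lo)).astype(int) + 1
    return np.clip(binned, 1, n_bins)

# Toy intensities standing in for an MR volume
img = np.array([0., 5., 10.])
print(zscore_normalize(img))
print(discretize_fixed_bin_number(img))  # bins 1, 17, 32
```

Absolute (fixed-bin-size) discretization would instead divide by a constant bin width, which is why it requires prior intensity normalization on MRI.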


Subject(s)
Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Glioma/diagnostic imaging , Magnetic Resonance Imaging/standards , Female , Humans , Male , Middle Aged
5.
Int J Radiat Oncol Biol Phys ; 108(3): 813-823, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32417412

ABSTRACT

PURPOSE: This study aims to evaluate the impact of key parameters on the quality of pseudo computed tomography (pCT) images generated from magnetic resonance imaging (MRI) with a 3-dimensional (3D) convolutional neural network (CNN). METHODS AND MATERIALS: Four hundred two brain tumor cases were retrieved, yielding associations between 182 computed tomography (CT) and T1-weighted MRI (T1) scans, 180 CT and contrast-enhanced T1-weighted MRI (T1-Gd) scans, and 40 CT, T1, and T1-Gd scans. A 3D CNN was used to map T1 or T1-Gd onto CT scans and to evaluate the importance of different components. First, the influence of training set size on testing set accuracy was assessed. We then evaluated the impact of the MRI sequence using T1-only and T1-Gd-only cohorts. Next, we investigated 4 MRI standardization approaches (histogram-based, zero-mean/unit-variance, white stripe, and no standardization), based on training, validation, and testing cohorts of 242, 81, and 79 patient cases, respectively, as well as the influence of bias field correction. Finally, 2 networks, HighResNet and 3D UNet, were compared to evaluate the impact of architecture on pCT quality. Mean absolute error, gamma indices, and dose-volume histograms were used as evaluation metrics. RESULTS: Models trained with all available cases produced higher-quality pCTs. The T1 and T1-Gd models differed by at most 0.07 percentage point in mean gamma index. The mean absolute error obtained with white stripe was 78 ± 22 Hounsfield units, slightly outperforming histogram-based, zero-mean/unit-variance, and no standardization (P < .0001). Regarding network architectures, 3%/3 mm gamma indices of 99.83% ± 0.19% and 99.74% ± 0.24% were obtained for HighResNet and 3D UNet, respectively. CONCLUSIONS: Our best pCTs were generated using more than 200 samples in the training data set. Training with T1 only or T1-Gd only did not significantly affect performance. Regardless of the preprocessing applied, the dosimetry quality remained equivalent and relevant for potential use in clinical practice.
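The mean absolute error in Hounsfield units used above is typically computed over a body mask; a minimal sketch in which the toy volumes and the mask threshold are illustrative only:

```python
import numpy as np

def mae_hu(pct, ct, body_mask):
    """Mean absolute error in Hounsfield units between a pseudo-CT and the
    reference CT, restricted to voxels inside the body contour."""
    return np.abs(pct[body_mask] - ct[body_mask]).mean()

# Toy 2x3 volumes standing in for full 3-D head scans
ct   = np.array([[-1000., 40., 300.], [-1000., 20., 900.]])
pct  = np.array([[-1000., 60., 250.], [-1000., 10., 950.]])
mask = ct > -200  # crude body mask excluding surrounding air
print(mae_hu(pct, ct, mask))  # 32.5
```

Gamma-index and dose-volume-histogram evaluation additionally require a dose calculation on both volumes, which is beyond a sketch like this.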


Subject(s)
Brain Neoplasms/diagnostic imaging , Deep Learning , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Brain/diagnostic imaging , Brain Neoplasms/radiotherapy , Contrast Media , Humans , Magnetic Resonance Imaging/standards , Neural Networks, Computer , Radiometry , Radiotherapy/standards , Retrospective Studies , Skull/diagnostic imaging
6.
Front Comput Neurosci ; 14: 17, 2020.
Article in English | MEDLINE | ID: mdl-32265680

ABSTRACT

Image registration and segmentation are the two most studied problems in medical image analysis. Deep learning algorithms have recently gained a lot of attention due to their success and state-of-the-art results in a variety of problems and communities. In this paper, we propose a novel, efficient, multi-task algorithm that addresses image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation, both quantitatively and qualitatively, on the registration and segmentation problems using two publicly available datasets (BraTS 2018 and OASIS 3), reporting results competitive with other recent state-of-the-art methods. Moreover, our proposed framework shows a significant improvement (p < 0.005) in registration performance inside the tumor regions, providing a generic method that does not require any predefined assumptions (e.g., absence of abnormalities) about the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.
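The idea of relaxing similarity constraints within tumor regions can be illustrated by simply excluding the predicted tumor mask from the intensity-matching term; this is a hypothetical sketch of the principle, not the paper's actual coupled formulation:

```python
import numpy as np

def masked_similarity(warped, fixed, tumor_mask):
    """Mean squared intensity difference computed only outside the tumor mask,
    so pathological regions do not drive the registration."""
    healthy = ~tumor_mask
    diff = warped[healthy] - fixed[healthy]
    return (diff ** 2).mean()

# Toy 1-D intensities: the last voxel is tumor and differs strongly
fixed  = np.array([1., 2., 3., 10.])
warped = np.array([1., 2., 3., 0.])
tumor  = np.array([False, False, False, True])
print(masked_similarity(warped, fixed, tumor))  # 0.0, tumor mismatch ignored
```

In the joint setting, the segmentation network supplies `tumor_mask` during inference, which is how the two tasks become coupled.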
